Deepcode: Feedback Codes via Deep Learning
Kim, Hyeji, Jiang, Yihan, Kannan, Sreeram, Oh, Sewoong, Viswanath, Pramod
The design of codes for communicating reliably over a statistically well defined channel is an important endeavor involving deep mathematical research and wide-ranging practical applications. In this work, we present the first family of codes obtained via deep learning, which significantly beats state-of-the-art codes designed over several decades of research. The communication channel under consideration is the Gaussian noise channel with feedback, whose study was initiated by Shannon; feedback is known theoretically to improve reliability of communication, but no practical codes that do so have ever been successfully constructed. We break this logjam by integrating information theoretic insights harmoniously with recurrent-neural-network based encoders and decoders to create novel codes that outperform known codes by 3 orders of magnitude in reliability. We also demonstrate several desirable properties of the codes: (a) generalization to larger block lengths; (b) composability with known codes; (c) adaptation to practical constraints. This result also has broader ramifications for coding theory: even when the channel has a clear mathematical model, deep learning methodologies, when combined with channel-specific information-theoretic insights, can potentially beat state-of-the-art codes constructed over decades of mathematical research.
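The channel model in this abstract, the AWGN channel with (possibly noisy) output feedback, is easy to simulate. The following numpy sketch is purely illustrative of that setting (the function name and unit-power assumption are ours, not from the paper): each channel use adds forward noise, and the transmitter receives a noisy copy of the channel output before its next transmission.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_feedback_round(x, forward_snr_db, feedback_snr_db):
    """One use of the AWGN channel with noisy output feedback.

    The receiver observes y = x + n_f; the transmitter gets back
    y_fb = y + n_b before choosing its next symbol. Noise standard
    deviations are derived from SNR in dB, assuming unit signal power.
    """
    fwd_sigma = 10 ** (-forward_snr_db / 20)
    fb_sigma = 10 ** (-feedback_snr_db / 20)
    x = np.asarray(x, dtype=float)
    y = x + rng.normal(0.0, fwd_sigma, size=x.shape)      # forward noise
    y_fb = y + rng.normal(0.0, fb_sigma, size=x.shape)    # feedback noise
    return y, y_fb
```

Setting `feedback_snr_db` very high recovers the classical noiseless-feedback model that Shannon studied.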
Feedback Lunch: Deep Feedback Codes for Wiretap Channels
Zhou, Yingyao, Devroye, Natasha, Günlü, Onur
Abstract--We consider reversely-degraded wiretap channels, for which the secrecy capacity is zero if there is no channel feedback. This work focuses on a seeded modular code design for the Gaussian wiretap channel with channel output feedback, combining universal hash functions for security and learned feedback-based codes for reliability to achieve positive secrecy rates. We study the trade-off between communication reliability and information leakage, illustrating that feedback enables agreeing on a secret key shared between legitimate parties, overcoming the security advantage of the wiretapper. Our findings also motivate code designs for sensing-assisted secure communication, to be used in next-generation integrated sensing and communication methods.
Secure communication can be achieved through key-based cryptography, which relies on securely-shared secret keys, and physical-layer security (PLS), which leverages the physical properties of the channel to ensure data confidentiality [1], [2].
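The "seeded universal hash" ingredient of this modular design can be illustrated with the Toeplitz hash family, a standard universal hash over GF(2) often used for privacy amplification. The sketch below is a generic illustration of that primitive, not the paper's specific construction; the function name and parameters are ours.

```python
import numpy as np

def toeplitz_hash(bits, out_len, seed):
    """Seeded universal hash (Toeplitz family) over GF(2).

    A public seed selects one hash from the family; the hash compresses
    an n-bit input down to out_len nearly-uniform bits, as in the
    privacy-amplification step of modular wiretap code designs.
    """
    bits = np.asarray(bits, dtype=int)
    n = len(bits)
    g = np.random.default_rng(seed)
    first_col = g.integers(0, 2, out_len)   # first column of the Toeplitz matrix
    first_row = g.integers(0, 2, n)         # first row (entry (0,0) taken from the row)
    # A Toeplitz matrix T satisfies T[i, j] = t[n - 1 + i - j] for a
    # single generating sequence t of length n + out_len - 1.
    t = np.concatenate([first_row[::-1], first_col[1:]])
    T = np.array([[t[n - 1 + i - j] for j in range(n)] for i in range(out_len)])
    return T.dot(bits) % 2
```

Because the seed is public, both legitimate parties can compute the same hash, while universality bounds the wiretapper's information about the output.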
Reviews: Deepcode: Feedback Codes via Deep Learning
The formal noisy channel setting is similar to a standard autoencoder framework, with a few key differences. First, due to channel limits we usually encode and transmit one bit of the message at a time; second, we get feedback, usually in the form of a noisy version of each encoded bit. Due to the sequential nature of the problem, plus the availability of feedback, the authors apply an RNN architecture. The input to the encoder at each step is the next bit to encode plus an estimate of the noise from previous steps (derived from the difference between the encoded message and the received feedback). Experiments suggest that this approach significantly outperforms existing approaches.
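The encoder interface the review describes can be sketched schematically: at each step the recurrent state consumes the current bit and the noise estimate recovered from feedback, then emits a channel symbol. The weights below are random placeholders, not trained Deepcode parameters, and the scalar RNN is a deliberate simplification of the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
HID = 8
Wx = rng.normal(0.0, 0.1, (HID, 2))   # input: [current bit, noise estimate]
Wh = rng.normal(0.0, 0.1, (HID, HID))
wo = rng.normal(0.0, 0.1, HID)

def encode_with_feedback(bits, forward_sigma=0.5):
    """Encode bits one at a time with an RNN-style state.

    After each transmission, (here noiseless) feedback returns the
    channel output; its difference from the sent symbol is the noise
    realization, fed as input to the next encoding step.
    """
    h = np.zeros(HID)
    sent = []
    noise_est = 0.0
    for b in bits:
        x = np.array([float(b), noise_est])
        h = np.tanh(Wx @ x + Wh @ h)            # recurrent state update
        c = float(wo @ h)                        # channel symbol for this step
        sent.append(c)
        y = c + rng.normal(0.0, forward_sigma)   # channel output, fed back
        noise_est = y - c                        # transmitter recovers the noise
    return np.array(sent)
```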
DEEP-IoT: Downlink-Enhanced Efficient-Power Internet of Things
At the heart of the Internet of Things (IoT) -- a domain witnessing explosive growth -- the imperative for energy efficiency and the extension of device lifespans has never been more pressing. This paper presents DEEP-IoT, a revolutionary communication paradigm poised to redefine how IoT devices communicate. Through a pioneering "listen more, transmit less" strategy, DEEP-IoT challenges and transforms the traditional transmitter (IoT devices)-centric communication model to one where the receiver (the access point) plays a pivotal role, thereby cutting down energy use and boosting device longevity. We not only conceptualize DEEP-IoT but also actualize it by integrating deep learning-enhanced feedback channel codes within a narrow-band system. Simulation results show a significant enhancement in the operational lifespan of IoT cells -- surpassing traditional systems using Turbo and Polar codes by up to 52.71%. This leap signifies a paradigm shift in IoT communications, setting the stage for a future where IoT devices boast unprecedented efficiency and durability.
Robust Non-Linear Feedback Coding via Power-Constrained Deep Learning
Kim, Junghoon, Kim, Taejoon, Love, David, Brinton, Christopher
The design of codes for feedback-enabled communications has been a long-standing open problem. Recent research on non-linear, deep learning-based coding schemes has demonstrated significant improvements in communication reliability over linear codes, but these schemes are still vulnerable to the presence of forward and feedback noise over the channel. In this paper, we develop a new family of non-linear feedback codes that greatly enhance robustness to channel noise. Our autoencoder-based architecture is designed to learn codes based on consecutive blocks of bits, which obtains de-noising advantages over bit-by-bit processing to help overcome the physical separation between the encoder and decoder over a noisy channel. Moreover, we develop a power control layer at the encoder to explicitly incorporate hardware constraints into the learning optimization, and prove that the resulting average power constraint is satisfied asymptotically. Numerical experiments demonstrate that our scheme outperforms state-of-the-art feedback codes by wide margins over practical forward and feedback noise regimes, and provide information-theoretic insights on the behavior of our non-linear codes. Moreover, we observe that, in a long blocklength regime, canonical error correction codes are still preferable to feedback codes when the feedback noise becomes high.
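The "power control layer" idea, constraining the arbitrary real outputs of a learned encoder to respect a hardware power budget, can be illustrated with a simple per-block normalization. This is a minimal sketch of the general technique, assuming a hard per-block constraint; the paper's actual layer and its asymptotic guarantee are more involved.

```python
import numpy as np

def power_normalize(symbols, power=1.0, eps=1e-9):
    """Scale a block of channel symbols so its average power equals `power`.

    The learned encoder may emit values of arbitrary magnitude; this
    layer rescales each block before transmission so the empirical
    average power constraint E[x^2] <= P is met exactly per block.
    """
    symbols = np.asarray(symbols, dtype=float)
    scale = np.sqrt(power / (np.mean(symbols ** 2) + eps))
    return symbols * scale
```

Because the operation is differentiable, it can sit inside the training graph, letting the optimizer learn codes that are good under the constraint rather than clipping them afterwards.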
AttentionCode: Ultra-Reliable Feedback Codes for Short-Packet Communications
Shao, Yulin, Ozfatura, Emre, Perotti, Alberto, Popovic, Branislav, Gunduz, Deniz
Ultra-reliable short-packet communication is a major challenge in future wireless networks with critical applications. To achieve ultra-reliable communications beyond 99.999%, this paper envisions a new interaction-based communication paradigm that exploits feedback from the receiver. We present AttentionCode, a new class of feedback codes leveraging deep learning (DL) technologies. The underpinnings of AttentionCode are three architectural innovations: AttentionNet, input restructuring, and adaptation to fading channels, accompanied by several training methods, including large-batch training, distributed learning, look-ahead optimizer, training-test signal-to-noise ratio (SNR) mismatch, and curriculum learning. The training methods can potentially be generalized to other wireless communication applications with machine learning. Numerical experiments verify that AttentionCode establishes a new state of the art among all DL-based feedback codes in both additive white Gaussian noise (AWGN) channels and fading channels. In AWGN channels with noiseless feedback, for example, AttentionCode achieves a block error rate (BLER) of $10^{-7}$ when the forward channel SNR is 0 dB for a block size of 50 bits, demonstrating the potential of AttentionCode to provide ultra-reliable short-packet communications.
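Among the training methods listed, curriculum learning for channel codes typically means starting at an easy (high) training SNR and annealing toward the harder target SNR. The schedule below is illustrative of that idea only; the function name, linear annealing, and endpoint values are our assumptions, not AttentionCode's exact recipe.

```python
def curriculum_snr(epoch, total_epochs, start_db=3.0, end_db=0.0):
    """Curriculum schedule for the training SNR (in dB).

    Training begins at an easy, high SNR and anneals linearly to the
    harder target SNR, so early epochs learn coarse structure before
    the noise level reaches the evaluation regime.
    """
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return start_db + frac * (end_db - start_db)
```

The related "training-test SNR mismatch" heuristic keeps the training SNR fixed at a value deliberately different from the evaluation SNR.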
Feedback is Good, Active Feedback is Better: Block Attention Active Feedback Codes
Ozfatura, Emre, Shao, Yulin, Ghazanfari, Amin, Perotti, Alberto, Popovic, Branislav, Gunduz, Deniz
Deep neural network (DNN)-assisted channel coding designs, such as low-complexity neural decoders for existing codes or end-to-end neural-network-based autoencoder designs, have recently gained interest due to their improved performance and flexibility, particularly for communication scenarios in which high-performing structured code designs do not exist. Communication in the presence of feedback is one such scenario, and practical code design for feedback channels has remained an open challenge in coding theory for many decades. Recently, DNN-based designs have shown impressive results in exploiting feedback. In particular, generalized block attention feedback (GBAF) codes, which utilize the popular transformer architecture, achieved significant improvement in terms of the block error rate (BLER) performance. However, previous works have focused mainly on passive feedback, where the transmitter observes a noisy version of the signal at the receiver. In this work, we show that GBAF codes can also be used for channels with active feedback. We implement a pair of transformer architectures, at the transmitter and the receiver, which interact with each other sequentially, and achieve a new state-of-the-art BLER performance, especially in the low SNR regime.
All you need is feedback: Communication with block attention feedback codes
Ozfatura, Emre, Shao, Yulin, Perotti, Alberto, Popovic, Branislav, Gunduz, Deniz
Deep neural network (DNN)-based channel code designs have recently gained interest as an alternative to conventional coding schemes, particularly for channels where existing codes do not provide satisfactory performance. Coding in the presence of feedback is one such problem, for which promising results have recently been obtained by various DNN-based coding architectures. In this paper, we introduce a novel learning-aided feedback code design, dubbed generalized block attention feedback (GBAF) codes, that achieves order-of-magnitude improvements in block error rate (BLER) compared to existing solutions. Sequence-to-sequence encoding and block-by-block processing of the message bits in GBAF codes not only reduce the communication overhead due to the reduced number of interactions between the transmitter and receiver, but also enable flexible coding rates. More importantly, GBAF codes provide a modular structure that can be implemented using different neural network architectures. In this work, we employ the transformer architecture, which outperforms all the prior DNN-based code designs in terms of the block error rate in the low signal-to-noise ratio regime when the feedback channel is noiseless.
E. Ozfatura, Y. Shao and D. Gündüz are with the Information Processing and Communications Lab, Department of Electrical and Electronic Engineering, Imperial College London.
Reliable communication in the presence of noise has been a long-standing challenge. Information storage and communication are two core technologies that underpin the information age, and the success of both hinges on error correction codes, such as BCH, Reed-Muller, convolutional, turbo, low-density parity-check (LDPC), and polar codes. While these codes can approach the fundamental Shannon capacity limit over an additive white Gaussian noise (AWGN) channel in the large blocklength regime, there are many scenarios where we do not have practical codes that approach the fundamental theoretical boundaries.
Coding in the presence of feedback is one such challenging, yet practical scenario. The classical feedback channel model was introduced and studied by Shannon [1].
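The overhead reduction from block-by-block processing is simple accounting: grouping K message bits into blocks means each interaction round carries one symbol per block, so the number of rounds scales with bits per block rather than with bits. The numbers and function below are illustrative, not the GBAF papers' exact parameters.

```python
import math

def gbaf_schedule(num_bits, block_size, rounds):
    """Accounting sketch for a block-feedback scheme.

    K message bits are grouped into blocks of `block_size`; each of
    `rounds` interactions transmits one symbol per block, giving
    `rounds * num_blocks` total channel uses and a rate of
    K / (rounds * num_blocks).
    """
    num_blocks = math.ceil(num_bits / block_size)
    channel_uses = rounds * num_blocks
    return num_blocks, channel_uses, num_bits / channel_uses
```

For example, 51 bits in blocks of 3 with 9 interaction rounds gives 17 blocks, 153 channel uses, and a rate of 1/3, while requiring only 9 transmitter-receiver interactions instead of one per bit.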